32 research outputs found

    SRDA-Net: Super-Resolution Domain Adaptation Networks for Semantic Segmentation

    Recently, unsupervised domain adaptation was proposed to address the domain-shift problem in semantic segmentation, but it may perform poorly when the source and target domains have different resolutions. In this work, we design a novel end-to-end semantic segmentation network, the Super-Resolution Domain Adaptation Network (SRDA-Net), which completes super-resolution and domain adaptation simultaneously. This characteristic exactly meets the requirements of semantic segmentation for remote sensing images, which usually involve various resolutions. SRDA-Net comprises three deep neural networks: a Super-Resolution and Segmentation (SRS) model that recovers the high-resolution image and predicts the segmentation map; a pixel-level domain classifier (PDC) that tries to distinguish which domain an image comes from; and an output-space domain classifier (ODC) that discriminates which domain a pixel-label distribution comes from. PDC and ODC serve as the discriminators, and SRS is treated as the generator. Through adversarial learning, SRS tries to align the source and target domains in both pixel-level visual appearance and output space. Experiments are conducted on two remote sensing datasets with different resolutions. SRDA-Net performs favorably against state-of-the-art methods in terms of accuracy and visual quality. Code and models are available at https://github.com/tangzhenjie/SRDA-Net
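
    The adversarial setup described above can be illustrated with a short sketch. The following PyTorch-style snippet is not the authors' code: srs, pdc, and odc are placeholder callables for the three networks, and the domain labels (1 = source, 0 = target) are an assumption.

```python
# Illustrative sketch only: SRS acts as the generator, while PDC and ODC act as
# discriminators over pixel-level appearance and output-space label maps.
import torch
import torch.nn.functional as F

def adversarial_losses(srs, pdc, odc, src_img, tgt_img):
    """Return (discriminator_loss, generator_loss) for one batch."""
    bce = F.binary_cross_entropy_with_logits

    # Generator (SRS) produces a super-resolved image and segmentation logits.
    src_sr, src_seg = srs(src_img)
    tgt_sr, tgt_seg = srs(tgt_img)

    # Discriminators learn to tell source (1) from target (0); generator
    # outputs are detached so only PDC/ODC receive these gradients.
    p_src, p_tgt = pdc(src_sr.detach()), pdc(tgt_sr.detach())
    o_src, o_tgt = odc(src_seg.detach()), odc(tgt_seg.detach())
    d_loss = (bce(p_src, torch.ones_like(p_src)) + bce(p_tgt, torch.zeros_like(p_tgt)) +
              bce(o_src, torch.ones_like(o_src)) + bce(o_tgt, torch.zeros_like(o_tgt)))

    # SRS is rewarded when its target-domain outputs are mistaken for source,
    # aligning the domains in pixel space (PDC) and output space (ODC).
    p_adv, o_adv = pdc(tgt_sr), odc(tgt_seg)
    g_loss = bce(p_adv, torch.ones_like(p_adv)) + bce(o_adv, torch.ones_like(o_adv))
    return d_loss, g_loss
```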

    A Decoupled Calibration Method for Camera Intrinsic Parameters and Distortion Coefficients

    Camera calibration is a necessary process in the field of vision measurement. In this paper, we propose a flexible and high-accuracy method to calibrate a camera. First, we compute the center of radial distortion, which is important for obtaining optimal results. Then, based on the division model of radial distortion, the camera intrinsic parameters and distortion coefficients are solved linearly and independently. Finally, the intrinsic parameters are optimized via the Levenberg-Marquardt algorithm. In the proposed method, the distortion coefficients and intrinsic parameters are successfully decoupled, and calibration accuracy is further improved through the subsequent optimization process. Moreover, the method yields good results whether the image distortion is relatively small or large. Both simulation and real-data experiments demonstrate the robustness and accuracy of the proposed method, and the experimental results show that it achieves higher accuracy than the classical methods
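
    The linear, decoupled solution hinges on the division model of radial distortion. Below is a minimal sketch of undistortion under a single-parameter division model, assuming the distortion center and coefficient have already been estimated; the names and the single-coefficient form are illustrative, not the paper's exact formulation.

```python
import numpy as np

def undistort_division(points, center, k):
    """Map distorted pixel coordinates to undistorted ones.

    points : (N, 2) distorted pixel coordinates
    center : (cx, cy) center of radial distortion
    k      : division-model radial distortion coefficient
    """
    center = np.asarray(center, dtype=float)
    p = np.asarray(points, dtype=float) - center   # coordinates relative to the distortion center
    r2 = np.sum(p**2, axis=1, keepdims=True)       # squared radius
    return center + p / (1.0 + k * r2)             # division model: p_u = p_d / (1 + k * r^2)
```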

    Small Object Detection Algorithm Based on Improved YOLOv8 for Remote Sensing

    Due to the characteristics of small targets in remote sensing images, such as background noise and limited information, the results of commonly used detection algorithms for small-target detection are unsatisfactory. To improve detection accuracy, we develop an improved algorithm based on YOLOv8, called LAR-YOLOv8. First, in the feature extraction network, the local module is enhanced with a dual-branch-architecture attention mechanism, while a vision transformer block is used to maximize the representation of the feature map. Second, an attention-guided bidirectional feature pyramid network is designed to generate more discriminative information by efficiently extracting features from the shallow network through a dynamic sparse attention mechanism and adding top-down paths to guide subsequent network modules in feature fusion. Finally, the RIOU loss function is proposed to avoid failure of the loss function and improve the shape consistency between the predicted and ground-truth boxes. Experimental results on the NWPU VHR-10, RSOD, and CARPK datasets verify that LAR-YOLOv8 achieves satisfactory results in terms of mAP (small), mAP, model parameters, and FPS, demonstrating that our modifications to the original YOLOv8 model are effective
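
    The RIOU formulation itself is not given in the abstract; as a reference point for the kind of box-regression loss being improved upon, here is a plain IoU loss for axis-aligned boxes in (x1, y1, x2, y2) format.

```python
def iou_loss(pred, target, eps=1e-7):
    """1 - IoU for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle.
    ix1, iy1 = max(pred[0], target[0]), max(pred[1], target[1])
    ix2, iy2 = min(pred[2], target[2]), min(pred[3], target[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus intersection.
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_t = (target[2] - target[0]) * (target[3] - target[1])
    return 1.0 - inter / (area_p + area_t - inter + eps)
```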

    Saliency auxiliary objects for visual tracking

    Low-Pass Parabolic FFT Filter for Airborne and Satellite Lidar Signal Processing

    In order to reduce random errors in lidar signal inversion, a low-pass parabolic fast Fourier transform filter (PFFTF) was introduced for noise elimination. A compact airborne Raman lidar system that applies PFFTF to process lidar signals was studied. The mathematics and simulations of PFFTF, along with other low-pass filters, the sliding mean filter (SMF), median filter (MF), empirical mode decomposition (EMD), and wavelet transform (WT), were studied, and the practical engineering value of PFFTF for lidar signal processing was verified. The method was tested on real lidar signals from the Wyoming Cloud Lidar (WCL). The results show that PFFTF has advantages over the other methods: it preserves the high-frequency components well while simultaneously removing much of the random noise in lidar signal processing
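
    As a rough illustration of the filtering step, the sketch below applies a hard-cutoff FFT low-pass filter to a sampled return; the parabolic spectral shaping that distinguishes PFFTF is not detailed in the abstract, so the cutoff behaviour shown here is an assumption.

```python
import numpy as np

def fft_lowpass(signal, cutoff_hz, fs):
    """Suppress spectral components above cutoff_hz and invert the FFT."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)  # frequency of each bin
    spectrum[freqs > cutoff_hz] = 0.0                 # remove high-frequency noise
    return np.fft.irfft(spectrum, n=len(signal))
```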

    Extrinsic Calibration for LiDAR–Camera Systems Using Direct 3D–2D Correspondences

    Recent advances in the fields of driverless cars, intelligent robots, and remote-sensing measurement have shown that LiDAR fused with cameras can provide more comprehensive and reliable sensing of the surroundings. However, since it is difficult to extract features from sparse LiDAR data to create 3D–2D correspondences, finding a method for accurate extrinsic calibration of all types of LiDAR with cameras has become a research hotspot. To solve this problem, this paper proposes a method that directly obtains the 3D–2D correspondences of LiDAR–camera systems to achieve accurate calibration. In this method, a laser detector card is used as an auxiliary tool to directly obtain the correspondences between laser spots and image pixels, thus solving the problem of extracting features from sparse LiDAR data. In addition, a coarse-to-fine two-stage framework is designed that not only solves the perspective-n-point problem with observation errors, but also requires only four LiDAR data points and the corresponding pixel information for accurate extrinsic calibration. Finally, extensive simulations and experimental results show that our method is more effective and accurate than existing methods
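
    The core step implied above, recovering the extrinsics from a handful of direct 3D–2D correspondences, can be sketched with OpenCV's PnP solver; the coarse-to-fine refinement of the paper is not reproduced here, and the choice of the EPnP solver is an assumption.

```python
import numpy as np
import cv2

def estimate_extrinsics(lidar_pts, pixel_pts, K, dist_coeffs):
    """Rotation (3x3) and translation (3x1) from the LiDAR frame to the camera frame.

    lidar_pts : (N, 3) laser-spot coordinates in the LiDAR frame, N >= 4
    pixel_pts : (N, 2) corresponding image pixel coordinates
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(lidar_pts, dtype=np.float64),
        np.asarray(pixel_pts, dtype=np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP solution failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
    return R, tvec
```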

    Processing Centroids of Smearing Star Image of Star Sensor

    A novel method was presented for increasing the accuracy of subpixel centroid estimation for smeared star images. A model of the smearing trajectory of the smeared star was built; it helped to derive the analytical form of the centroid-estimation errors caused by image smearing. In the algorithm, these errors were estimated accurately and used to revise the centroid computed by the centre-of-mass (CoM) method. Simulations were run to study the effect of angular rate, integration time, and the actual position of the star on the accuracy of centroid estimation. The results suggested that the proposed algorithm had a precision better than 1/10 of a pixel when the angular rate was up to 3.0 deg/s
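
    For reference, the centre-of-mass (CoM) centroid that the method starts from and then corrects for smear can be written as below; the smear-error model itself is not reproduced here.

```python
import numpy as np

def com_centroid(window):
    """Intensity-weighted centroid (row, col) of a star-image window."""
    w = np.asarray(window, dtype=float)
    rows, cols = np.indices(w.shape)
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total
```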

    Model and performance analysis of coupled heat and moisture transfer of the roadbed slope

    Soil samples from the roadbed slope of the Datong-Xi'an high-speed railway section are taken as the research object. The frost-heave property of the fiber soil of the roadbed slope under a single freeze-thaw cycle is studied, with the fiber content taken into account. The distributions of the temperature and humidity fields inside the slope are studied numerically, and the coupled transfer of heat and moisture is revealed. The results are helpful for the optimal design of the roadbed slope

    A Convenient Calibration Method for LRF-Camera Combination Systems Based on a Checkerboard

    In this paper, a simple and easy-to-use high-precision calibration method is proposed for the LRF-camera combined measurement systems that are widely used at present. The method can be applied not only to mainstream 2D and 3D LRF-camera systems, but also to calibrating newly developed 1D LRF-camera combined systems. It only requires a calibration board and the recording of at least three sets of data. First, the camera parameters and distortion coefficients are decoupled via the distortion center. Then, the spatial coordinates of the laser spots are solved using line and plane constraints, and the LRF-camera extrinsic parameters are estimated. In addition, we establish a cost function for optimizing the system. Finally, the calibration accuracy and characteristics of the method are analyzed through simulation experiments, and the validity of the method is verified through the calibration of a real system
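
    The line-and-plane constraint mentioned above can be sketched as a ray-plane intersection: each laser spot lies on the LRF measurement ray and on the checkerboard plane estimated from the camera image. The snippet below is a minimal illustration with hypothetical variable names, not the paper's implementation.

```python
import numpy as np

def laser_spot_on_plane(ray_origin, ray_dir, plane_normal, plane_point):
    """Intersect an LRF measurement ray with the calibration-board plane."""
    ray_origin = np.asarray(ray_origin, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    denom = np.dot(ray_dir, plane_normal)
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the board plane")
    t = np.dot(np.asarray(plane_point) - ray_origin, plane_normal) / denom
    return ray_origin + t * ray_dir    # 3D coordinates of the laser spot
```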